# 31 Determinants: Part 3

In this part I would like to describe some applications of determinants, namely

- Invertibility of square matrices
- Geometric
    - Signed areas and volumes of parallelepipeds
    - Gauss's shoelace formula
    - Geometric scaling factor of a linear transformation
- Algebraic
    - Cramer's rule -- coefficient-wise solution to an invertible linear system
    - Adjugate matrix and the inverse matrix formula
- Calculus
    - Jacobian matrix

Along the way, Cramer's rule will show us where the determinant naturally arises from.

## Invertibility of square matrices.

We saw this already in the previous parts, but I would just like to emphasize this primary connection between invertibility of square matrices and determinants:

> **Proposition.**
> Let $A$ be an $n\times n$ square matrix, then $A$ is invertible if and only if $\det(A)\neq 0$.

But can we ascribe some meaning to the **value** and **sign** of the determinant? Yes, in fact the determinant computes the signed $n$-dimensional volume of a certain $n$-parallelepiped.

## Geometric connections.

### Signed areas and volumes of a parallelepiped.

Suppose we have two vectors $\vec v = \begin{pmatrix}v_{1}\\v_{2}\end{pmatrix}$ and $\vec w = \begin{pmatrix}w_{1}\\w_{2}\end{pmatrix}$ in the Euclidean plane $\mathbb{R}^{2}$. We can form a parallelogram using the origin, $\vec v$, $\vec w$, and the point $\vec v+\vec w$:

![[1 teaching/smc-spring-2024-math-13/linear-algebra-notes/---files/31-determinants-part-3 2024-04-05 14.28.15.excalidraw.svg]]

Then we claim

> **Proposition.** If $\vec v = \begin{pmatrix}v_{1}\\v_{2}\end{pmatrix}$ and $\vec w = \begin{pmatrix}w_{1}\\w_{2}\end{pmatrix}$ are two vectors in $\mathbb{R}^{2}$, then the determinant $$ \det(\vec v | \vec w) = \det \begin{pmatrix}v_{1} & w_{1} \\ v_{2} & w_{2}\end{pmatrix} $$ is the **signed area of the parallelogram formed by $\vec v$ and $\vec w$**.

And the absolute value $|\det(\vec v| \vec w)|$ is just the area of the parallelogram formed by $\vec v$ and $\vec w$. The sign can be determined by a "right-hand rule": if, starting from the vector $\vec v$, we sweep counterclockwise across the parallelogram to reach $\vec w$, then the sign is positive; otherwise it is negative.

**Example.** Find the area of the parallelogram in the plane with vertices $(4,2), (7,8), (11,11), (8,5)$.

![[1 teaching/smc-spring-2024-math-13/linear-algebra-notes/---files/31-determinants-part-3 2024-04-05 14.51.40.excalidraw.svg]]

$\blacktriangleright$ Let us translate these four vertices and the figure so that one of the vertices is at the origin, say by translating by $-(4,2)$:

![[1 teaching/smc-spring-2024-math-13/linear-algebra-notes/---files/31-determinants-part-3 2024-04-05 15.05.43.excalidraw.svg]]

This gives four new vertices $$ (0,0),(3,6),(7,9),(4,3) $$ which form a parallelogram of the same area. This new parallelogram is formed by the vectors $\vec v = \begin{pmatrix}3\\6\end{pmatrix}$ and $\vec w = \begin{pmatrix}4\\3\end{pmatrix}$.
So using determinants, we have the area of the parallelogram to be $$ \left|\det\begin{pmatrix}3 & 4 \\ 6 & 3\end{pmatrix}\right| = |9-24| = 15, $$ and since translating does not change the area, we conclude the area is $15$. $\blacklozenge$

Of course, if we can find areas of parallelograms, then we can find areas of triangles.

**Example.** Compute the area of the triangle with vertices $(4,4),(5,8),(-2,6)$.

![[1 teaching/smc-spring-2024-math-13/linear-algebra-notes/---files/31-determinants-part-3 2024-04-05 15.19.50.excalidraw.svg]]

$\blacktriangleright$ Let us translate the points and the figure so that one of the vertices is at the origin, say we translate by $-(4,4)$, so we get new vertices $(0,0),(1,4),(-6,2)$:

![[1 teaching/smc-spring-2024-math-13/linear-algebra-notes/---files/31-determinants-part-3 2024-04-05 15.27.09.excalidraw.svg]]

And now observe that the triangle has half the area of the corresponding parallelogram. So the area is $$ \frac{1}{2}\left|\det\begin{pmatrix}1 & -6 \\ 4 & 2\end{pmatrix}\right|=\frac{1}{2}|2+24|=13. \quad\blacklozenge $$

**Remark.** If the vectors $\vec v$ and $\vec w$ in $\mathbb{R}^{2}$ are linearly dependent, then the parallelogram they form has no area! This agrees with $\det(\vec v | \vec w)=0$ in this case!

This can be generalized to higher dimensions. In general we have

> **Proposition.**
> Let $\vec v_{1},\vec v_{2},\ldots,\vec v_{n}$ be $n$ vectors in $\mathbb{R}^{n}$. Then $$ \det(\vec v_{1}|\vec v_{2}|\cdots |\vec v_{n}) $$ is the signed $n$-dimensional volume of the $n$-dimensional parallelepiped formed by the vectors $\vec v_{1},\vec v_{2},\ldots,\vec v_{n}$. If we take the absolute value, then it is the volume.

And it holds that

> If $\vec v_{1},\ldots,\vec v_{n}$ are **linearly dependent vectors** in $\mathbb{R}^{n}$, then the parallelepiped they form has zero $n$-dimensional volume, hence $\det(\vec v_{1}|\cdots|\vec v_{n})=0$.

A parallelepiped is a higher dimensional generalization of a parallelogram. For instance, three vectors $\vec v,\vec u, \vec w$ in $\mathbb{R}^{3}$ form a 3-dimensional parallelepiped that looks like an oblique box:

![[smc-spring-2024-math-13/linear-algebra-notes/---files/31-determinants-part-3 2024-04-05 15.44.49.excalidraw.svg]]

This box then has 3-dimensional volume $|\det(\vec v | \vec u | \vec w)|$.

### Gauss's shoelace formula.

Since we can calculate areas of triangles in the plane, we can extend this to compute the area of a polygon, given a list of its vertices. One such formula is due to Gauss.

> **Gauss's shoelace formula.**
> Suppose a simple (non-self-intersecting) polygon $P$ in the plane has $n$ vertices given in order $$ P_{1}=(x_{1},y_{1}), P_{2}=(x_{2},y_{2}),\ldots,P_{n}=(x_{n},y_{n}). $$ Then, writing $x_{n+1}=x_{1}$ and $y_{n+1}=y_{1}$, the sum $$ \frac{1}{2}\sum_{k=1}^{n}\det\begin{pmatrix}x_{k}&x_{k+1}\\y_{k}&y_{k+1}\end{pmatrix} $$ gives the **signed area of this polygon**.

Note that each $2\times 2$ matrix records two adjacent vertices on an edge, and the last point $P_{n}$ connects back to $P_{1}$.
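To make the formula concrete, here is a minimal computational sketch in Python (the function name `shoelace_area` and the use of plain coordinate tuples are my own choices for illustration). It accumulates exactly the $2\times 2$ determinants from the statement above, and reproduces the area of the hexagon example further below.

```python
def shoelace_area(vertices):
    """Signed area of a simple polygon via Gauss's shoelace formula.

    `vertices` is a list of (x, y) pairs listed in order around the polygon.
    Each term is the 2x2 determinant det[[x_k, x_{k+1}], [y_k, y_{k+1}]],
    and the last vertex wraps back around to the first.
    """
    n = len(vertices)
    total = 0.0
    for k in range(n):
        x1, y1 = vertices[k]
        x2, y2 = vertices[(k + 1) % n]   # P_n connects back to P_1
        total += x1 * y2 - x2 * y1       # 2x2 determinant of adjacent vertices
    return total / 2.0


# The hexagon from the example below; the signed area comes out to 37.
print(shoelace_area([(1, 1), (6, 3), (4, 8), (-1, 3), (-4, 7), (-6, 4)]))  # 37.0
```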
If we expand the sum in the formula, it looks like $$ \frac{1}{2}\left[\det\begin{pmatrix}x_{1} & x_{2} \\ y_{1} & y_{2}\end{pmatrix} +\det\begin{pmatrix}x_{2} & x_{3} \\ y_{2} & y_{3}\end{pmatrix} +\det\begin{pmatrix}x_{3} & x_{4} \\ y_{3} & y_{4}\end{pmatrix} +\cdots+\det\begin{pmatrix}x_{n-1} & x_{n} \\ y_{n-1} & y_{n}\end{pmatrix} +\det\begin{pmatrix}x_{n} & x_{1} \\ y_{n} & y_{1}\end{pmatrix}\right]. $$

Why is it called Gauss's **shoelace** formula? Since each determinant is $2\times 2$, we can calculate it by an "X" pattern. So if we list the coordinates as column vectors in order, $P_{1},P_{2},\ldots,P_{n},P_{1}$, wrapping back around to the first one, we have

![[1 teaching/smc-spring-2024-math-13/linear-algebra-notes/---files/31-determinants-part-3 2024-04-05 16.20.27.excalidraw.svg]]

where each pair of blue and red crossing lines is one determinant calculation.

**Example.** Find the area of the simple polygon in the plane whose vertices, given in order, are $$ P_{1} = (1,1), P_{2} = (6,3), P_{3} = (4,8), P_{4} = (-1,3), P_{5} = (-4,7), P_{6} = (-6,4). $$

![[smc-spring-2024-math-13/linear-algebra-notes/---files/31-determinants-part-3 2024-04-05 16.10.51.excalidraw.svg]]

$\blacktriangleright$ Applying the shoelace formula, the signed area is $$ \frac{1}{2}(-3 + 36+20+5 +26-10) = \frac{74}{2} = 37. $$ So the area is $37$. $\blacklozenge$

### Geometric scaling factor of a linear transformation.

Suppose we have a linear transformation $T:\mathbb{R}^{n}\to\mathbb{R}^{n}$. Recall that we can express it as left-multiplication by a matrix: there exists an $n\times n$ matrix $A$ such that $$ T(\vec x) = A \vec x. $$ Then $\det(A)$ can be interpreted as follows:

> **Proposition.**
> Let $T:\mathbb{R}^{n}\to \mathbb{R}^{n}$ be a linear transformation with standard matrix $A$, so that $T(\vec x) = A \vec x$. Let $\Omega$ be a region in $\mathbb{R}^{n}$ with $n$-dimensional volume $\text{vol}(\Omega)$. Then $T(\Omega)$, the image of $\Omega$ under $T$, has $n$-dimensional volume $$ \text{vol}(T(\Omega)) = |\det(A)| \, \text{vol}(\Omega), $$ that is, $|\det(A)|$ is the geometric scaling factor of the linear map $T$.

Here is an illustration. Drawn below is some blob $\Omega$ in $\mathbb{R}^{2}$ and its image $T(\Omega)$ under some linear map $T:\mathbb{R}^{2}\to \mathbb{R}^{2}$. The area of $T(\Omega)$ is $|\det(A)|$ times the area of $\Omega$:

![[1 teaching/smc-spring-2024-math-13/linear-algebra-notes/---files/31-determinants-part-3 2024-04-05 18.53.22.excalidraw.svg]]

(The $2$-volume is just area.)

**Example.** An ellipse centered at the origin has an equation of the form $$ \frac{x^{2}}{a^{2}} + \frac{y^{2}}{b^{2}}=1 $$ with semi-axes $a > 0$ and $b > 0$. What is the area enclosed by an ellipse?

Consider the unit disk, $\Omega = \{(x,y):x^{2}+y^{2} \le 1\}$, and consider the linear transformation $$ T\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}a & 0 \\ 0 & b\end{pmatrix} \begin{pmatrix}x\\y\end{pmatrix}. $$ This linear transformation scales the $x$-axis by a factor of $a$ and the $y$-axis by a factor of $b$.
One can show that the image $T(\Omega)$ is precisely the region enclosed by the ellipse, $\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}\le 1$. Then, by the proposition, $$ \begin{align*} & \text{area}(T(\Omega)) = |\det(A)|\,\text{area}(\Omega) \\ \implies & \text{area of ellipse} = |ab|\cdot \text{area of unit disk} \\ \implies & \text{area of ellipse} = ab \pi. \end{align*} $$ This gives the familiar formula for the area of an ellipse with semi-axes $a > 0$, $b > 0$, namely $\pi a b$. $\blacklozenge$

**Remark.** The area of an ellipse is easy to write down; curiously, however, the **arclength of an ellipse** is not so straightforward! This led to a vast amount of mathematics in the investigation of "elliptic integrals".

## Algebraic connections.

### Cramer's rule.

Let us go back to solving linear systems again. Say we have a $2\times 2$ linear system, two equations and two unknowns $x,y$, say $$ \left\{\begin{array}{ll} ax + by = r_{1} & \quad (R_{1})\\ cx + dy = r_{2} & \quad (R_{2}) \end{array}\right. $$ Let us assume for a moment that $a,b,c,d$ are nonzero and that any division we perform poses no issues. We can solve for the variables by multiplying $R_{1}$ by $c$, multiplying $R_{2}$ by $a$, and subtracting to eliminate $x$ from the second equation: $$ \left\{ \begin{array}{rcl} ac x + bcy &=& r_{1}c \\ (ad-bc)y &=& r_{2}a - r_{1}c \end{array} \right. $$ Here we see that we can solve for $y$ as $$ y = \frac{r_{2}a - r_{1}c}{ad-bc} = \frac{\det\begin{pmatrix}a & r_{1} \\ c & r_{2}\end{pmatrix}}{\det\begin{pmatrix}a&b\\c&d\end{pmatrix}}. $$ Similarly, if we multiply $R_{1}$ by $d$, multiply $R_{2}$ by $b$, and eliminate $y$ instead, we get $$ \left\{ \begin{array}{rcl} (ad - bc)x &=& r_{1}d -r_{2}b \\ bc x + bdy &=& r_{2}b \end{array} \right. $$ from which we see $$ x = \frac{r_{1}d-r_{2}b}{ad-bc} = \frac{\det\begin{pmatrix}r_{1} & b\\r_{2} & d\end{pmatrix}}{\det\begin{pmatrix}a&b\\c&d\end{pmatrix}}. $$

What we have just derived is **Cramer's rule** for $2\times 2$ invertible systems:

> **Cramer's rule.** (For $2 \times 2$ invertible systems.)
> Let $A$ be a $2\times 2$ invertible matrix, and $\vec b$ a vector in $\mathbb R^2$. Then the linear system $$ A \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \vec b $$ can be solved entry by entry, where $$ x_1 = \frac{\det A_1}{\det A}\quad\text{and}\quad x_2 = \frac{\det A_2}{\det A}, $$ where $A_i$ is the matrix obtained by replacing the $i$-th column of $A$ by $\vec b$.

This shows that the determinant arises naturally from solving linear systems of equations!

**Example.** Use Cramer's rule to solve the linear system $$ \begin{align*} 13 x - 5 y & = 9 \\ 7 x - 3 y & = 11 \end{align*} $$

$\blacktriangleright$ Applying Cramer's rule directly, we have $$ x = \frac{\det\begin{pmatrix}9 & -5 \\ 11 & -3\end{pmatrix}}{\det\begin{pmatrix}13 & -5 \\ 7 & -3\end{pmatrix}} = \frac{-27+55}{-39+35}= \frac{28}{-4} = -7, $$ and $$ y = \frac{\det\begin{pmatrix}13 & 9 \\ 7 & 11\end{pmatrix}}{\det\begin{pmatrix}13 & -5 \\ 7 & -3\end{pmatrix}} = \frac{143-63}{-39+35}= \frac{80}{-4} = -20. \quad\blacklozenge $$

This extends to larger square systems as well.

> **Cramer's rule.** (For $n\times n$ invertible systems.)
> Let $A$ be an $n\times n$ invertible matrix, and $\vec b$ a vector in $\mathbb R^n$. Then the linear system $$ A \begin{pmatrix} x_1 \\ x_2 \\\vdots\\x_{n}\end{pmatrix} = \vec b $$ can be solved entry by entry, where for each $i$ we have $$ x_i = \frac{\det A_i}{\det A}, $$ where $A_i$ is the matrix obtained by replacing the $i$-th column of $A$ by $\vec b$.
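Since the $n\times n$ statement is completely formulaic, it is easy to turn into a short computational sketch. Here is one in Python with NumPy (the helper name `cramer_solve` is just for illustration, and `numpy.linalg.det` does the determinant work); applied to the $2\times 2$ example above it recovers $x=-7$, $y=-20$.

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b for an invertible square A using Cramer's rule:
    x_i = det(A_i) / det(A), where A_i is A with column i replaced by b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        A_i = A.copy()
        A_i[:, i] = b                      # replace the i-th column by b
        x[i] = np.linalg.det(A_i) / det_A  # Cramer's formula for the i-th unknown
    return x

# The system from the example above: 13x - 5y = 9, 7x - 3y = 11.
print(cramer_solve([[13, -5], [7, -3]], [9, 11]))   # approximately [-7. -20.]
```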
**Remark.** Repeatedly computing determinants is not necessarily the fastest way to solve a linear system; however, Cramer's rule provides a formulaic way to express each unknown of the system.

### Adjugate matrix and the matrix inverse formula.

For any $n\times n$ square matrix $A$ we can define its **adjugate matrix**[^adj], which is an $n\times n$ matrix that we denote $\text{adj}(A)$, whose entries are given by $$ [\text{adj}(A)]_{ij} = (-1)^{i+j} \det(A^{T}\setminus(i,j)). $$ Several things to note: there are the alternating signs, and the $(i,j)$-th entry is computed using $\det(A^{T}\setminus(i,j))$, the determinant of the matrix obtained by removing the $i$-th row and $j$-th column of $A^{T}$. One can also equivalently define it as $$ [\text{adj}(A)]_{ij}=(-1)^{i+j}\det (A\setminus(j,i)). $$

**Example.** Let us take the $2\times 2$ matrix $$ A = \begin{pmatrix}1 & 2\\3 & 4 \end{pmatrix}. $$ To find its adjugate $\text{adj}(A)$, let us compute each of its four entries: $$ \begin{align*} [\text{adj}(A)]_{11} = (-1)^{1+1} \det(A^{T}\setminus(1,1))=1\cdot \det [4]=4 \\ [\text{adj}(A)]_{12} = (-1)^{1+2} \det(A^{T}\setminus(1,2))=-1\cdot \det [2]=-2\\ [\text{adj}(A)]_{21} = (-1)^{2+1} \det(A^{T}\setminus(2,1))=-1\cdot \det [3]=-3\\ [\text{adj}(A)]_{22} = (-1)^{2+2} \det(A^{T}\setminus(2,2))=1\cdot \det [1]=1 \\ \end{align*} $$ (it helps if you also write out $A^{T}$ on the side) so we have $$ \text{adj}(A) = \begin{pmatrix}4 & -2 \\ -3 & 1\end{pmatrix}. $$

In fact, here we see that a $2\times 2$ matrix $A=\begin{pmatrix}a & b \\ c & d\end{pmatrix}$ has adjugate matrix $\text{adj}(A) = \begin{pmatrix}d & -b \\ -c & a\end{pmatrix}$. Doesn't this look familiar? Indeed, this is part of the inverse formula for a $2\times 2$ matrix. In general we have the following:

> **Proposition.** Let $A$ be any square matrix. Then the matrix products satisfy $$ A \ \text{adj}(A) =\text{adj}(A)\ A = \det(A) \ I, $$ which is the identity matrix scaled by $\det(A)$.

In particular, if $\det(A)\neq 0$, then we can derive an inverse matrix formula:

> For $A$ an $n\times n$ matrix with $\det(A)\neq0$, we have $$ A^{-1} = \frac{\text{adj}(A)}{\det(A)}. $$

This gives an "inverse matrix formula". What is interesting is that even when $A^{-1}$ does not exist, we can still compute $\text{adj}(A)$ as an $n\times n$ matrix, which, when multiplied with $A$, yields the identity matrix scaled by $\det(A)$. In some sense, $\text{adj}(A)$ is a kind of "pre-inverse" of $A$. Again, $\text{adj}(A)$ can always be computed, because its entries are just cofactors (signed determinants of submatrices), which never require division!

**Remark.** The adjugate matrix has "fallen off" of the popular "basic math curriculum", but it is still useful in more advanced algebra -- it shows that for any square matrix there is always a matrix we can multiply it by to obtain a very regularly defined matrix, namely the diagonal matrix $\det(A)\, I$. One can in fact prove important theorems like the Cayley-Hamilton theorem purely algebraically by using the adjugate matrix.

## Calculus connections and Jacobian matrix.

This section is optional; you can read it for your own benefit.

### Jacobian matrix.

In single-variable calculus we learned that a function $f(x)$ is differentiable at some point $x_{0}$ if the limit $$ \lim_{h\to0} \frac{f(x_{0}+h) - f(x_{0})}{h} = f'(x_{0}) $$ exists, and we denote the value of this limit $f'(x_{0})$. We can re-write the above as $$ \lim_{h\to 0} \frac{f(x_{0}+h)-f(x_{0})-f'(x_{0})h}{h} = 0. $$

But what does this mean?
This means that we can approximate $f$ near $x=x_{0}$ by a **line**, in other words, $$ f(x)-f(x_{0}) \approx f'(x_{0}) (x-x_{0}), $$ the so-called linear approximation.

In fact we can generalize this to higher dimensions. For a function $f:\mathbb{R}^{k}\to \mathbb{R}^{n}$, we say that it is differentiable at $p\in \mathbb{R}^{k}$ if there is a linear transformation $T_{p}:\mathbb{R}^{k}\to \mathbb{R}^{n}$ that well-approximates $f$ near $p$, that is, $$ f(x)-f(p) \approx T_{p}(x-p). $$ To make this precise, it means $T_{p}$ is a linear map such that $$ \lim_{h\to 0} \frac{f(p+h)-f(p)-T_{p}(h)}{\Vert h\Vert} = 0. $$

The standard matrix of this linear map $T_{p}$ is called the **Jacobian matrix** of $f$ at $p$, which we can write as $J_{p} = [T_{p}]_{\text{std}}$. And, as it turns out, if such a linear map $T_{p}$ exists that well-approximates $f$ near $p$, then the Jacobian matrix is given by $$ J_{p} = \begin{pmatrix} \frac{\partial f_{1}}{\partial x_{1}}(p) & \frac{\partial f_{1}}{\partial x_{2}}(p) & \cdots & \frac{\partial f_{1}}{\partial x_{k}}(p) \\ \frac{\partial f_{2}}{\partial x_{1}}(p) & \frac{\partial f_{2}}{\partial x_{2}}(p) & \cdots & \frac{\partial f_{2}}{\partial x_{k}}(p) \\ \vdots & & & \vdots \\ \frac{\partial f_{n}}{\partial x_{1}}(p) & \frac{\partial f_{n}}{\partial x_{2}}(p) & \cdots & \frac{\partial f_{n}}{\partial x_{k}}(p) \end{pmatrix}, $$ where $f_{i}$ is the $i$-th component of the function $f:\mathbb{R}^{k}\to \mathbb{R}^{n}$, with each $f_{i}=f_{i}(x_{1},x_{2},\ldots,x_{k})$. The $(i,j)$-th entry of $J_{p}$ is the partial derivative $\frac{\partial f_{i}}{\partial x_{j}}$ evaluated at $p$.

Bottom line: if $f$ is differentiable at $x=p$, then there is a matrix $J_{p}$, given as above, such that near $x=p$ we have $$ f(x)-f(p) \approx J_{p} (x-p). $$

### Change of variables in integration.

Let us consider the integration of some function $f=f(x,y)$ over some region $R$, $$ \iint_{R} f(x,y)\,dx\,dy. $$ It is sometimes desirable to perform a change of variables, so that either the new region is nicer or the integrand is nicer. Let us say we have some change of variables $$ \begin{align*} x&=g(u,v) \\ y &= h(u,v) \end{align*} $$ How should the integral change so that it is now in terms of $u,v$? Well, we need to figure out a new region $S$ such that $$ (x,y) \in R \iff (u,v) \in S, $$ and, because we have transformed the region, there is a scaling factor to account for. Indeed, the integral becomes $$ \iint_{S}f(g(u,v),h(u,v)) \left| \frac{\partial(x,y)}{\partial(u,v)} \right| du\,dv, $$ where $\left| \frac{\partial(x,y)}{\partial(u,v)} \right|$ is a mnemonic device for $$ \left| \frac{\partial(x,y)}{\partial(u,v)} \right| = \left| \det \begin{pmatrix} \frac{\partial x}{\partial u} & \frac{\partial x}{\partial v} \\ \frac{\partial y}{\partial u} & \frac{\partial y}{\partial v}\end{pmatrix} \right|. $$ Wait a minute, that matrix is just the **Jacobian matrix** of the linear approximation of this change of variables! And taking the absolute value of its determinant gives the **geometric scaling factor** of that linear approximation. Hence, when we change variables in the integral, the Jacobian determinant keeps track of how the change of variables affects the integral geometrically!

---

[^adj]: In older literature it is called the "adjoint", but this name is best avoided, because the word adjoint in mathematics refers to another, much more important concept.
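As a small computational coda to this optional section, here is a sketch that approximates a Jacobian matrix by central finite differences (the helper name `jacobian_fd`, the step size, and the sample values $a=3$, $b=2$ are my own illustrative choices) and checks that, for the stretching map $(u,v)\mapsto(au,bv)$ from the ellipse example, $|\det J|$ is exactly the area scaling factor $ab$.

```python
import numpy as np

def jacobian_fd(f, p, h=1e-6):
    """Approximate the Jacobian matrix of f: R^k -> R^n at the point p
    by central finite differences; column j holds the partial derivatives
    with respect to the j-th variable, matching the layout of J_p above."""
    p = np.asarray(p, dtype=float)
    fp = np.asarray(f(p), dtype=float)
    J = np.zeros((len(fp), len(p)))
    for j in range(len(p)):
        e = np.zeros_like(p)
        e[j] = h
        J[:, j] = (np.asarray(f(p + e)) - np.asarray(f(p - e))) / (2 * h)
    return J

# The change of variables from the ellipse example: (u, v) |-> (a u, b v).
a, b = 3.0, 2.0
stretch = lambda p: np.array([a * p[0], b * p[1]])
J = jacobian_fd(stretch, [0.7, -0.4])
print(J)                 # approximately [[3, 0], [0, 2]]
print(np.linalg.det(J))  # approximately a*b = 6, the geometric scaling factor
```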